42 research outputs found

    Deflation for semismooth equations

    Variational inequalities can in general support distinct solutions. In this paper we study an algorithm for computing distinct solutions of a variational inequality, without varying the initial guess supplied to the solver. The central idea is the combination of a semismooth Newton method with a deflation operator that eliminates known solutions from consideration. Given one root of a semismooth residual, deflation constructs a new problem for which a semismooth Newton method will not converge to the known root, even from the same initial guess. This enables the discovery of other roots. We prove the effectiveness of the deflation technique under the same assumptions that guarantee locally superlinear convergence of a semismooth Newton method. We demonstrate its utility on various finite- and infinite-dimensional examples drawn from constrained optimization, game theory, economics and solid mechanics. (Comment: 24 pages, 3 figures.)
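
    For readers unfamiliar with deflation, the sketch below illustrates the shifted deflation operator commonly used in the Newton-deflation literature; the function name and the defaults for the power and shift are illustrative assumptions, not details quoted from the paper.

        import numpy as np

        def deflated_residual(F, x, known_roots, power=2.0, shift=1.0):
            # Shifted deflation: scale the residual F(x) by
            #   prod_i ( 1 / ||x - x_i||^power + shift ),
            # which blows up near every known root x_i, so a Newton-type
            # iteration on the deflated residual is repelled from them.
            eta = 1.0
            for root in known_roots:
                eta *= 1.0 / np.linalg.norm(x - root) ** power + shift
            return eta * F(x)

    In this reading, the same (semismooth) Newton solver is simply rerun on the deflated residual from the original initial guess; any root it then finds must be distinct from the roots already known.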

    Asymptotic Consistency for Nonconvex Risk-Averse Stochastic Optimization with Infinite Dimensional Decision Spaces

    Optimal values and solutions of empirical approximations of stochastic optimization problems can be viewed as statistical estimators of their true values. From this perspective, it is important to understand the asymptotic behavior of these estimators as the sample size goes to infinity. This area of study has a long tradition in stochastic programming. However, the literature lacks consistency analyses for problems in which the decision variables are taken from an infinite-dimensional space, as arise in optimal control, scientific machine learning, and statistical estimation. By exploiting the typical problem structures found in these applications that give rise to hidden norm compactness properties for solution sets, we prove consistency results for nonconvex risk-averse stochastic optimization problems formulated in infinite-dimensional spaces. The proof is based on several crucial results from the theory of variational convergence. The theoretical results are demonstrated for several important problem classes arising in the literature. (Comment: 24 pages.)
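
    As a rough illustration of the estimators in question (the notation is ours and shows only the risk-neutral special case), the true and empirical problems can be written as

        v^* = \min_{x \in X} \mathbb{E}[J(x,\xi)],
        \qquad
        \hat v_N = \min_{x \in X} \frac{1}{N} \sum_{i=1}^{N} J(x,\xi^i),

    and consistency asks that \hat v_N \to v^* and that cluster points of empirical minimizers solve the true problem as N \to \infty; in the setting of the paper, X is an infinite-dimensional space and the expectation is replaced by a (possibly nonsmooth) risk measure.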

    Proximal Galerkin: A structure-preserving finite element method for pointwise bound constraints

    The proximal Galerkin finite element method is a high-order, low-iteration-complexity, nonlinear numerical method that preserves the geometric and algebraic structure of bound constraints in infinite-dimensional function spaces. This paper introduces the proximal Galerkin method and applies it to solve free boundary problems, enforce discrete maximum principles, and develop scalable, mesh-independent algorithms for optimal design. The paper leads to a derivation of the latent variable proximal point (LVPP) algorithm: an unconditionally stable alternative to the interior point method. LVPP is an infinite-dimensional optimization algorithm that may be viewed as having an adaptive barrier function that is updated with a new informative prior at each (outer loop) optimization iteration. One of the main benefits of this algorithm is witnessed when analyzing the classical obstacle problem. Therein, we find that the original variational inequality can be replaced by a sequence of semilinear partial differential equations (PDEs) that are readily discretized and solved with, e.g., high-order finite elements. Throughout this work, we arrive at several unexpected contributions that may be of independent interest. These include (1) a semilinear PDE we refer to as the entropic Poisson equation; (2) an algebraic/geometric connection between high-order positivity-preserving discretizations and certain infinite-dimensional Lie groups; and (3) a gradient-based, bound-preserving algorithm for two-field density-based topology optimization. The complete latent variable proximal Galerkin methodology combines ideas from nonlinear programming, functional analysis, tropical algebra, and differential geometry and can potentially lead to new synergies among these areas as well as within variational and numerical analysis.
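
    As a hedged sketch of the kind of subproblem LVPP produces, consider the model obstacle problem with lower bound u \ge \varphi. Up to signs and scalings (which should be taken from the paper, not from this sketch), one proximal step with parameter \alpha_k and latent variable \psi reads

        \alpha_k(-\Delta u_k - f) + \psi_k = \psi_{k-1},
        \qquad
        u_k = \varphi + e^{\psi_k},

    so that eliminating \psi_k yields a semilinear PDE of entropic Poisson type, \alpha_k(-\Delta u_k - f) + \ln(u_k - \varphi) = \psi_{k-1}, in which the constraint u_k > \varphi is enforced automatically by the exponential parametrization rather than by a variational inequality.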

    Optimal Control of the Landau-de Gennes Model of Nematic Liquid Crystals

    We present an analysis and numerical study of an optimal control problem for the Landau-de Gennes (LdG) model of nematic liquid crystals (LCs), which are a crucial component of modern technology. LCs exhibit long-range orientational order in their nematic phase, which is represented by a tensor-valued (spatial) order parameter Q = Q(x). Equilibrium LC states correspond to Q functions that (locally) minimize an LdG energy functional. Thus, we consider an L^2-gradient flow of the LdG energy that allows for finding local minimizers and leads to a semi-linear parabolic PDE, for which we develop an optimal control framework. We then derive several a priori estimates for the forward problem, including continuity in space-time, that allow us to prove existence of optimal boundary and external ``force'' controls and to derive optimality conditions through the use of an adjoint equation. Next, we present a simple finite element scheme for the LdG model and a straightforward optimization algorithm. We illustrate optimization of LC states through numerical experiments in two and three dimensions that seek to place LC defects (where Q(x) = 0) in desired locations, a capability that is desirable in applications. (Comment: 26 pages, 9 figures.)
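
    For orientation, a commonly used one-constant form of the LdG energy and its L^2-gradient flow is sketched below; conventions and signs for the bulk potential vary across the literature, so this is illustrative rather than the paper's exact formulation.

        E(Q) = \int_\Omega \frac{L}{2} |\nabla Q|^2 + \psi_B(Q) \, dx,
        \qquad
        \psi_B(Q) = \frac{A}{2} \mathrm{tr}(Q^2) - \frac{B}{3} \mathrm{tr}(Q^3) + \frac{C}{4} \bigl(\mathrm{tr}(Q^2)\bigr)^2,

        \partial_t Q = -\frac{\delta E}{\delta Q} = L \Delta Q - D_Q \psi_B(Q)
        \quad \text{(projected onto symmetric, traceless tensors)},

    with the controls entering through the boundary data and/or an external force on the right-hand side of the flow.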

    On quantitative stability in infinite-dimensional optimization under uncertainty

    The vast majority of stochastic optimization problems require the approximation of the underlying probability measure, e.g., by sampling or using observations. It is therefore crucial to understand the dependence of the optimal value and optimal solutions on these approximations as the sample size increases or more data becomes available. Due to the weak convergence properties of sequences of probability measures, there is no guarantee that these quantities will exhibit favorable asymptotic properties. We consider a class of infinite-dimensional stochastic optimization problems inspired by recent work on PDE-constrained optimization as well as functional data analysis. For this class of problems, we provide both qualitative and quantitative stability results on the optimal value and optimal solutions. In both cases, we make use of the method of probability metrics. The optimal values are shown to be Lipschitz continuous with respect to a minimal information metric and consequently, under further regularity assumptions, with respect to certain Fortet-Mourier and Wasserstein metrics. We prove that even in the most favorable setting, the solutions are at best Hölder continuous with respect to changes in the underlying measure. The theoretical results are tested in the context of Monte Carlo approximation for a numerical example involving PDE-constrained optimization under uncertainty. (Funding: Deutsche Forschungsgemeinschaft, http://dx.doi.org/10.13039/501100001659. Peer reviewed.)
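
    Schematically (our notation, with illustrative exponents), quantitative stability results of this type take the form

        |v(\mu) - v(\nu)| \le L \, \zeta(\mu,\nu),
        \qquad
        \|x^*(\mu) - x^*(\nu)\| \le C \, \zeta(\mu,\nu)^{1/2},

    where v(\mu) and x^*(\mu) denote the optimal value and an optimal solution under the measure \mu, and \zeta is a suitable probability metric (minimal information, Fortet-Mourier, or Wasserstein); the Hölder exponent 1/2 is the rate one typically expects under a quadratic growth condition and is given here only to illustrate the "at best Hölder continuous" statement above.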

    A semismooth Newton method with analytical path-following for the H1-projection onto the Gibbs simplex

    An efficient, function-space-based second-order method for the H1-projection onto the Gibbs simplex is presented. The method makes use of the theory of semismooth Newton methods in function spaces as well as Moreau-Yosida regularization and techniques from parametric optimization. A path-following technique is considered for the regularization parameter updates. A rigorous first- and second-order sensitivity analysis of the value function for the regularized problem is provided to justify the update scheme. The viability of the algorithm is then demonstrated for two applications found in the literature: binary image inpainting and labeled data classification. In both cases, the algorithm exhibits mesh-independent behavior.
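
    As a hedged sketch of the underlying problem and regularization (our notation): given data g, the projection solves

        \min_{u \in H^1(\Omega;\mathbb{R}^n)} \tfrac{1}{2} \|u - g\|_{H^1}^2
        \quad \text{s.t.} \quad u_i \ge 0 \ \text{a.e.}, \quad \sum_{i=1}^n u_i = 1 \ \text{a.e.},

    where the pointwise inequality constraints are handled by a Moreau-Yosida penalty of the form \tfrac{\gamma}{2} \|\min(0,u)\|_{L^2}^2, a semismooth Newton method is applied to the regularized optimality system, and the path-following rule drives \gamma \to \infty.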

    Uncertainty Quantification in Image Segmentation Using the Ambrosio–Tortorelli Approximation of the Mumford–Shah Energy

    The quantification of uncertainties in image segmentation based on the Mumford–Shah model is studied. The aim is to characterize how noise and other errors in the original image propagate to the restoration result and, in particular, to the reconstructed edges (sharp image contrasts). Analytically, we rely on the Ambrosio–Tortorelli approximation and discuss the existence of measurable selections of its solutions, as well as sampling-based methods and the limitations of other popular methods. Numerical examples illustrate the theoretical findings.
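
    For context, the Ambrosio–Tortorelli functional in one common normalization (the constants here are illustrative) reads

        AT_\varepsilon(u,v) = \int_\Omega (v^2 + \eta_\varepsilon) |\nabla u|^2 \, dx
        + \alpha \int_\Omega \varepsilon |\nabla v|^2 + \frac{(1-v)^2}{4\varepsilon} \, dx
        + \beta \int_\Omega (u - g)^2 \, dx,

    where g is the noisy image, u the reconstruction, and the phase field v \approx 0 marks edges; as \varepsilon \to 0 the functional Gamma-converges to the Mumford–Shah energy, which is why uncertainty in g propagates to both the restored image u and the edge indicator v.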

    Risk-averse optimal control of random elliptic VIs

    We consider a risk-averse optimal control problem governed by an elliptic variational inequality (VI) subject to random inputs. By deriving KKT-type optimality conditions for a penalised and smoothed problem and studying convergence of the stationary points with respect to the penalisation parameter, we obtain two forms of stationarity conditions. The lack of regularity with respect to the uncertain parameters and the complexities induced by the presence of the risk measure give rise to new challenges unique to the stochastic setting. We also propose a path-following stochastic approximation algorithm using variance reduction techniques and demonstrate the algorithm on a modified benchmark problem.
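
    Schematically (our notation), the problem class has the form

        \min_{z \in Z_{ad}} \ \mathcal{R}\bigl[J(S(z,\xi))\bigr] + \frac{\nu}{2} \|z\|^2,
        \qquad
        y = S(z,\xi) \in K: \quad \langle A(\xi) y - B z - f(\xi), \, v - y \rangle \ge 0 \quad \forall v \in K,

    where \mathcal{R} is a risk measure such as the conditional value-at-risk; the penalisation and smoothing mentioned above replace the VI constraint by a semilinear PDE so that KKT-type conditions can be derived, and the stationarity systems are recovered in the limit of the penalisation parameter.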